141 research outputs found

    Estimation of bicycle level of service for urban Indian roads

    Get PDF
    Bicycle level of service (BLOS) methodologies have been developed for suburban and urban as well as rural road segments. Today, the utilitarian bicyclist requires access to suburban, urban, and rural environments to travel safely between home and work. To complement BLOS methodologies that incorporate mental stressors along road segments, this study develops a methodology by which BLOS and the Bicycle Compatibility Index (BCI) can be determined through qualitative analysis. Qualitative analysis deals with real-time human perceptions, taking into account the satisfaction level of bicyclists riding along a road. The satisfaction level of the bicyclist, or the compatibility of the road for bicyclists, is derived from a survey in which bicyclists are asked questions about their perception of safety, visibility, and convenience. The survey is conducted on numerous bicyclists, whose views are recorded as ratings. These ratings can be represented graphically to give a clear picture of bicyclist satisfaction with respect to road compatibility. The BCI is computed using the inverse-variance method, and finally the BLOS, ranging from LOS-A to LOS-F, is determined. Although qualitative analysis differs from quantitative analysis in the data it draws on, the results of the two do not differ to a great extent. The BCI identifies which intersection approaches have the highest priority for bicycle safety improvements within a particular jurisdiction. The model provides traffic planners and others the capability to rate roadways with respect to bicyclists' level of satisfaction, and can be used in evaluating existing roads, redesigning existing roads, or designing new roads.
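    A minimal sketch of the scoring step described above, assuming a hypothetical rating matrix, an inverse-variance combination of per-question means, and purely illustrative LOS cut-offs (the study's calibrated thresholds are not given here):

```python
import numpy as np

# Hypothetical survey data: rows = respondents, columns = perception
# questions (safety, visibility, convenience), rated on a 1-6 scale.
ratings = np.array([
    [4, 5, 3],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
])

def bci_inverse_variance(ratings):
    """Combine per-question mean ratings into a single BCI score with
    inverse-variance weights, so questions on which respondents agree
    more strongly contribute more to the index."""
    means = ratings.mean(axis=0)
    variances = ratings.var(axis=0, ddof=1) + 1e-9   # avoid divide-by-zero
    weights = 1.0 / variances
    return np.sum(weights * means) / np.sum(weights)

def blos_grade(bci, cutoffs=(5.0, 4.0, 3.0, 2.0, 1.0)):
    """Map a BCI score onto LOS-A..LOS-F; the cut-offs are placeholders."""
    for grade, cutoff in zip("ABCDE", cutoffs):
        if bci >= cutoff:
            return f"LOS-{grade}"
    return "LOS-F"

print(blos_grade(bci_inverse_variance(ratings)))   # e.g. LOS-C
```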

    Real-time data acquisition, transmission and archival framework

    Get PDF
    Most human actions are a direct response to stimuli from the five senses. In the past few decades there has been growing interest in capturing and storing the information obtained from the senses using analog and digital sensors. By storing these data it is possible to further analyze and better understand human perception. While many devices have been created for capturing and storing data, existing software and hardware architectures are aimed at specialized devices and require expensive high-performance systems. This thesis aims to create a framework that supports capture and monitoring of a variety of sensors and can be scaled to run on both low- and high-performance systems such as netbooks, laptops, and desktops. The proposed architecture was tested using aural and visual sensors because of their availability and their higher bandwidth requirements compared to other sensors. Four portable computing devices with a varied set of hardware capabilities were used for testing. On each system the same suite of tests was run to benchmark and analyze CPU, memory, network, and storage usage. The results showed that all of these platforms could capture data from multiple video, audio, and other sensor sources in real time. Performance was shown to scale based on several factors, the most important being CPU architecture, network topology, and the data interfaces used.
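    The abstract does not spell out the framework's internals; the sketch below illustrates only the general capture-and-archive pattern it describes, as a producer/consumer pipeline with a stand-in read function in place of a real device driver:

```python
import json
import queue
import threading
import time

def capture(sensor_id, read_fn, buf, period=0.1):
    """Producer: poll one sensor and enqueue timestamped samples.
    read_fn is a stand-in for a real device-driver call."""
    while True:
        buf.put({"sensor": sensor_id, "t": time.time(), "value": read_fn()})
        time.sleep(period)

def archive(buf, path):
    """Consumer: drain the shared buffer and append samples to disk,
    decoupling the acquisition rate from storage latency."""
    with open(path, "a") as f:
        while True:
            f.write(json.dumps(buf.get()) + "\n")
            f.flush()

buf = queue.Queue(maxsize=1024)   # bounded buffer absorbs bursts
threading.Thread(target=capture, args=("mic", lambda: 0.0, buf), daemon=True).start()
threading.Thread(target=capture, args=("cam", lambda: 0.0, buf), daemon=True).start()
threading.Thread(target=archive, args=(buf, "samples.jsonl"), daemon=True).start()
time.sleep(1.0)                   # let the pipeline run briefly
```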

    Application of Deep Learning in Chemical Processes: Explainability, Monitoring and Observability

    Get PDF
    The last decade has seen remarkable advances in speech, image, and language recognition tools that have been made available to the public through computer and mobile device applications. Most of these improvements were achieved by Artificial Intelligence (AI)/deep learning (DL) algorithms (Hinton et al., 2006), a term that generally refers to a set of novel neural network architectures and algorithms such as long short-term memory (LSTM) units, convolutional neural networks (CNN), autoencoders (AE), t-distributed stochastic neighbor embedding (t-SNE), etc. Although neural networks are not new, a combination of relatively recent improvements in training methods and the availability of increasingly powerful computers now makes it possible to model much more complex nonlinear dynamic behaviour using deeper structures of neurons, i.e. more layers, than ever before (Goodfellow et al., 2016). However, it is recognized that training neural nets of such complexity requires vast amounts of data. In this sense, manufacturing processes are good candidates for deep learning applications, since they use computers and information systems for monitoring and control and thus generate massive amounts of data. This is especially true in pharmaceutical companies such as Sanofi Pasteur, the industrial collaborator for the current study, where large data sets are routinely stored for monitoring and regulatory purposes. Although novel DL algorithms have been applied with great success in image analysis, speech recognition, and language translation, their applications to chemical and, in particular, pharmaceutical processes are scarce.

    The current work investigates deep learning in process systems engineering for three main areas of application: (i) developing a deep learning classification model for profit-based operating regions; (ii) developing both supervised and unsupervised process monitoring algorithms; and (iii) observability analysis.

    It is recognized that most empirical or black-box models, including DL models, have good generalization capabilities but are difficult to interpret. For example, with these methods it is difficult to understand how a particular decision is made or which input variable/feature most influences that decision. Such understanding is expected to shed light on why biased results are obtained, or why a wrong class is predicted with higher probability in classification problems. Hence, a key goal of the current work is to derive process insights from DL models. To this end, the work proposes both supervised and unsupervised learning approaches to identify regions of process inputs that result in corresponding regions, i.e. ranges of values, of process profit. Furthermore, it is shown that the ability to better interpret the model by identifying the most informative inputs can be used to reduce over-fitting. To this end, a neural network (NN) pruning algorithm is developed that provides important physical insights into the system, identifying the inputs that have positive and negative effects on the profit function, and detects significant changes in process phenomena. It is shown that pruning input variables significantly reduces the number of parameters to be estimated and improves classification test accuracy for both case studies: the Tennessee Eastman Process (TEP) and an industrial vaccine manufacturing process.
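    The pruning rule itself is not specified in the abstract; as one plausible stand-in, the sketch below scores each input by the aggregate magnitude of its first-layer weights and retains only the strongest fraction, after which the classifier would be retrained on the reduced input set:

```python
import numpy as np

def prune_inputs(W1, X, keep_fraction=0.8):
    """Rank input variables by the aggregate magnitude of their
    first-layer weights W1 (shape: n_inputs x n_hidden), a simple
    stand-in for the relevance measure developed in the thesis, and
    keep only the strongest fraction of the columns of X."""
    scores = np.abs(W1).sum(axis=1)               # one score per input
    k = max(1, int(keep_fraction * len(scores)))
    keep = np.sort(np.argsort(scores)[-k:])       # retained input indices
    return X[:, keep], keep

# Iterating prune -> retrain shrinks the parameter count; per the
# thesis, discarding uninformative inputs also reduces over-fitting
# and improves classification test accuracy.
```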
    The ability to store large amounts of data has permitted the use of deep learning (DL) and optimization algorithms in the process industries. To meet high levels of product quality, efficiency, and reliability, a process monitoring system is needed. The two aspects of Statistical Process Control (SPC) are fault detection and diagnosis (FDD). Many multivariate statistical methods, such as PCA and PLS and their dynamic variants, have been used extensively for fault detection; however, the inherent non-linearities in the process pose challenges for these linear models. Numerous deep learning FDD approaches have also been developed in the literature, but contribution plots for identifying the root cause of a fault have not been derived from Deep Neural Networks (DNNs). To this end, the supervised fault detection problem in the current work is formulated as a binary classification problem, while the supervised fault diagnosis problem is formulated as a multi-class classification problem that identifies the type of fault. The concept of explainability of DNNs is then explored, with particular application to the FDD problem. The developed methodology is demonstrated on the TEP with non-incipient faults. Incipient faults are faulty conditions in which the signal-to-noise ratio is small; they have not been widely studied in the literature. To address them, a hierarchical dynamic deep learning algorithm is developed specifically for the detection and diagnosis of incipient faults.

    One major drawback of both methods described above is the need for labeled data, i.e., both normal-operation and faulty-operation data. In an industrial setting, especially for biochemical processes, most data are obtained during normal operation, and faulty data may be unavailable or insufficient. Hence, we also develop an unsupervised DL approach for process monitoring. It involves a novel objective function and a NN architecture tailored to detect faults effectively; the idea is to learn the distribution of normal operation data in order to differentiate it from fault conditions. To demonstrate the advantages of the proposed methodology for fault detection, systematic comparisons are conducted with Multiway Principal Component Analysis (MPCA) and Multiway Partial Least Squares (MPLS) on an industrial-scale penicillin simulator.

    Past investigations reported that the variability in productivity in Sanofi's pertussis vaccine manufacturing process may be highly correlated with biological phenomena, i.e. oxidative stresses, that are not routinely monitored by the company. While the company monitors and stores a large amount of fermentation data, these data may not be sufficiently informative about the underlying phenomena affecting productivity. Furthermore, since adding new sensors to pharmaceutical processes requires extensive and expensive validation and certification procedures, it is very important to assess a sensor's potential ability to observe the relevant phenomena before adopting it in the manufacturing environment. This motivates the study of the observability of the phenomena from the available data. An algorithm is proposed to check observability for the classification task from the observed data (measurements). The proposed methodology makes use of a supervised autoencoder (AE) to reduce the dimensionality of the inputs.
    Thereafter, a criterion based on the distance between samples is used to calculate the percentage of overlap between the defined classes. The proposed algorithm is tested on the benchmark Tennessee Eastman process and then applied to the industrial vaccine manufacturing process.
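    The exact distance criterion is not given in the abstract; a minimal sketch of one plausible instantiation scores overlap as the fraction of samples whose nearest neighbour in the supervised-AE latent space belongs to a different class:

```python
import numpy as np

def class_overlap_percent(Z, y):
    """Z: (n_samples, n_latent) encodings from the supervised AE
    bottleneck (assumed available); y: (n_samples,) class labels.
    Returns the percentage of samples whose nearest neighbour in the
    latent space carries a different label."""
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # exclude self-matches
    nearest = D.argmin(axis=1)         # nearest-neighbour indices
    return 100.0 * np.mean(y[nearest] != y)

# High overlap suggests the measurements cannot separate the classes,
# i.e. the phenomenon of interest is effectively unobservable from them.
```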

    Recent Advancement, Technology & Applications of Multiple Emulsions

    Get PDF
    Multiple emulsions are complex polydispersed systems in which both oil-in-water and water-in-oil emulsions exist simultaneously, stabilized by lipophilic and hydrophilic surfactants respectively. The ratio of these surfactants is important in achieving stable multiple emulsions. Of the water-in-oil-in-water (w/o/w) and oil-in-water-in-oil (o/w/o) types, the former has wider areas of application and hence has been studied in greater detail. Formulation, preparation techniques, and in vitro characterization methods for multiple emulsions are reviewed. Various factors affecting the stability of multiple emulsions, and stabilization approaches with specific reference to the w/o/w type, are discussed in detail. Favorable drug release mechanisms and/or rates, along with the in vivo fate of multiple emulsions, make them a versatile carrier. They find a wide range of applications in controlled or sustained drug delivery, targeted delivery, taste masking, bioavailability enhancement, enzyme immobilization, etc. Multiple emulsions have also been employed as an intermediate step in the microencapsulation process and are systems of increasing interest for the oral delivery of hydrophilic drugs that are unstable in the gastrointestinal tract, such as proteins and peptides. With advances in techniques for the preparation, stabilization, and rheological characterization of multiple emulsions, they should provide a novel carrier system for drugs, cosmetics, and pharmaceutical agents. In this review, emphasis is placed on formulation, stabilization techniques, and potential applications of multiple emulsion systems.

    Explainability: Relevance based Dynamic Deep Learning Algorithm for Fault Detection and Diagnosis in Chemical Processes

    Full text link
    The focus of this work is on Statistical Process Control (SPC) of a manufacturing process based on available measurements. Two important applications of SPC in industrial settings are fault detection and diagnosis (FDD). In this work, a deep learning (DL) based methodology is proposed for FDD. We investigate the application of an explainability concept to enhance the FDD accuracy of a deep neural network model trained with a data set containing a relatively small number of samples. Explainability is quantified by a novel relevance measure of the input variables, calculated with the Layerwise Relevance Propagation (LRP) algorithm. It is shown that the relevances can be used to iteratively discard redundant input feature vectors/variables, resulting in reduced over-fitting of noisy data, increased distinguishability between output classes, and superior FDD test accuracy. The efficacy of the proposed method is demonstrated on the benchmark Tennessee Eastman Process.
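    A minimal numpy sketch of the generic LRP-epsilon rule for a dense ReLU network, offered to illustrate how per-input relevances of the kind used here can be computed; it is not the authors' exact implementation:

```python
import numpy as np

def lrp_epsilon(x, layers, eps=1e-6):
    """Propagate the predicted-class score back to the inputs of a
    dense ReLU network given as a list of (W, b) pairs, using the
    standard LRP-epsilon redistribution rule."""
    # Forward pass, caching the input activation of every layer.
    acts = [x]
    for W, b in layers:
        x = np.maximum(x @ W + b, 0.0)
        acts.append(x)

    # Seed relevance with the winning class score only.
    R = np.zeros_like(acts[-1])
    k = int(np.argmax(acts[-1]))
    R[k] = acts[-1][k]

    # Backward pass: R_j = a_j * sum_k w_jk * R_k / (z_k + eps*sign(z_k)).
    for (W, b), a in zip(reversed(layers), reversed(acts[:-1])):
        z = a @ W + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer
        R = a * (W @ (R / z))
    return R                                        # one relevance per input

# Averaging |R| over a data set ranks the input variables; the lowest-
# ranked ones are candidates for the iterative discarding step above.
```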